Performance metrics for medical image segmentation models are used to measure the agreement between a reference annotation and a prediction. A common set of metrics is used in the development of such models to make results more comparable. However, there is a mismatch between the distributions found in public datasets and the cases encountered in clinical practice. Many common metrics fail to measure the impact of this mismatch, especially for clinical datasets containing uncertain, small, or empty reference annotations. Consequently, models may not be validated for clinically meaningful agreement by such metrics. Dimensions of evaluating clinical value include independence from reference-annotation volume (size), consideration of the uncertainty of reference annotations, reward of volumetric and/or location agreement, and reward of correct classification of empty reference annotations. Unlike the usual public datasets, our in-house dataset is more representative: it contains uncertain, small, and empty reference annotations. We examine publicly available metrics on the predictions of a deep learning framework in order to identify which settings of common metrics provide meaningful results. We compare against a public benchmark dataset without uncertain, small, or empty reference annotations. The code will be published.
translated by Google Translate
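The failure modes the abstract names for small and empty reference annotations are easy to see with a generic Dice implementation (this is not the paper's code; the value returned for an empty/empty pair is an explicit convention choice, which is exactly the issue being raised):

```python
import numpy as np

def dice_score(ref: np.ndarray, pred: np.ndarray) -> float:
    """Dice coefficient between two binary masks.

    Without a convention, the score is undefined (0/0) when both masks are
    empty -- the clinical edge case the abstract highlights.
    """
    ref = ref.astype(bool)
    pred = pred.astype(bool)
    intersection = np.logical_and(ref, pred).sum()
    denom = ref.sum() + pred.sum()
    if denom == 0:
        # Convention choice: score a correctly predicted empty mask as 1.0.
        return 1.0
    return 2.0 * intersection / denom

# A tiny (small-volume) reference: missing its single voxel zeroes the score.
ref = np.zeros((4, 4), dtype=bool)
ref[0, 0] = True
pred_miss = np.zeros((4, 4), dtype=bool)
print(dice_score(ref, pred_miss))  # 0.0 -- small structures are punished hard

# Empty reference vs. empty prediction: meaningful only via the convention.
print(dice_score(np.zeros((4, 4)), np.zeros((4, 4))))  # 1.0 under our convention
```

Because the whole score hinges on one voxel (or on an arbitrary empty-mask convention), averaging such values over a clinical dataset can hide or inflate real performance, which motivates the evaluation dimensions listed above.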
Active learning is an important technique for automated machine learning systems. In contrast to neural architecture search (NAS), which aims to automate the design of neural network architectures, active learning aims to automate the selection of training data. It is especially important for long-tailed tasks, in which the samples of interest are sparsely distributed. Active learning alleviates the expensive data-annotation problem by incrementally training models with efficient data selection. Instead of annotating all unlabeled samples, it iteratively selects and annotates the most valuable ones. Active learning is popular in image classification, but it has not been fully explored in object detection. Most current object detection methods are evaluated under different settings, making it difficult to compare their performance fairly. To facilitate research in this field, this paper contributes an active learning benchmark framework, named ALBench, for evaluating active learning in object detection. Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols. We hope this automated benchmark system will help researchers easily reproduce published results and compare objectively against prior art. The code will be released through GitHub.
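The select-and-annotate loop described above can be sketched with a standard acquisition function. Least-confidence sampling is one common choice for illustration; it is not necessarily what ALBench's baselines use:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_most_uncertain(probs: np.ndarray, budget: int) -> np.ndarray:
    """Least-confidence sampling: pick the `budget` unlabeled samples whose
    top-class probability is lowest, i.e. where the model is least sure.
    In a full loop these would be sent to annotators, added to the labeled
    pool, and the model retrained before the next round."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:budget]

# Fake predicted class probabilities for 10 unlabeled samples, 3 classes.
probs = rng.dirichlet(np.ones(3), size=10)
to_annotate = select_most_uncertain(probs, budget=2)
print(to_annotate)  # indices of the two least-confident samples
```

A benchmark such as ALBench fixes everything around this acquisition call (data splits, training schedule, annotation budget) so that swapping in a different selection function is the only variable being compared.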
This paper introduces an open-source platform for the rapid development of computer vision applications. The platform puts efficient data development at the center of the machine learning development process, integrates active learning methods and data and model version control, and uses concepts such as projects to enable fast, parallel iteration over multiple task-specific datasets. We make the platform open by abstracting the development process into core states and operations, and by designing open APIs that integrate third-party tools as implementations of those operations. This open design reduces development and adoption costs for ML teams with existing tools. At the same time, the platform supports recording project development histories, from which successful projects can be shared to further improve model production efficiency on similar tasks. The platform is open source and is already used internally to meet the growing demand for customized real-world computer vision applications.
Model compression is a critical technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted heuristics and rule-based policies that require domain experts to explore the large design space trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC), which leverages reinforcement learning to provide the model compression policy. This learning-based compression policy outperforms the conventional rule-based compression policy by achieving a higher compression ratio, better preserving accuracy, and freeing human labor. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the handcrafted model compression policy for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet and achieved 1.81× speedup of measured inference latency on an Android phone and 1.43× speedup on the Titan XP GPU, with only 0.1% loss of ImageNet Top-1 accuracy.
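The trade-off the AMC agent optimizes can be illustrated with a resource-aware reward. This is only a sketch under assumed numbers: the actual method uses a DDPG agent with per-layer states and actions, and its constraint handling differs from this simple penalty:

```python
def compression_reward(accuracy: float, flops_ratio: float,
                       target: float = 0.25) -> float:
    """Illustrative reward for a compression policy: accuracy, penalized
    when the remaining-FLOPs ratio exceeds the budget. A `target` of 0.25
    corresponds to the 4x FLOPs reduction mentioned in the abstract.
    The penalty weight 10.0 is an arbitrary illustrative constant."""
    over_budget = max(0.0, flops_ratio - target)
    return accuracy - 10.0 * over_budget

# Meeting the budget lets accuracy dominate; missing it is heavily penalized.
print(compression_reward(0.70, 0.25))  # 0.7  -- within budget
print(compression_reward(0.70, 0.50))  # -1.8 -- over budget
```

Under such a reward, a learned agent is steered toward policies that first satisfy the resource constraint and then maximize accuracy, rather than relying on hand-tuned per-layer pruning rules.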
Recent deep networks are capable of memorizing the entire data even when the labels are completely random. To overcome overfitting on corrupted labels, we propose a novel technique of learning another neural network, called MentorNet, to supervise the training of the base deep network, namely StudentNet. During training, MentorNet provides a curriculum (sample weighting scheme) for StudentNet to focus on samples whose labels are probably correct. Unlike existing curricula that are usually predefined by human experts, MentorNet learns a data-driven curriculum dynamically with StudentNet. Experimental results demonstrate that our approach can significantly improve the generalization performance of deep networks trained on corrupted training data. Notably, to the best of our knowledge, we achieve the best published result on WebVision, a large benchmark containing 2.2 million images with real-world noisy labels. The code is at https://github.com/google/mentornet.
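A minimal example of the kind of predefined, hand-crafted curriculum that MentorNet is contrasted against is a fixed self-paced weighting rule (MentorNet itself *learns* the weighting jointly with StudentNet; this fixed threshold is only a stand-in to show what "sample weighting scheme" means):

```python
import numpy as np

def self_paced_weights(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Hand-crafted curriculum: keep (weight 1) samples with small loss,
    drop (weight 0) samples with large loss, which are more likely to
    carry corrupted labels."""
    return (losses < threshold).astype(float)

# Per-sample losses; the two large values suggest mislabeled examples.
losses = np.array([0.1, 0.3, 2.5, 0.2, 3.1])
w = self_paced_weights(losses, threshold=1.0)
weighted_loss = (w * losses).sum() / w.sum()
print(w)              # [1. 1. 0. 1. 0.]
print(weighted_loss)  # mean loss over the trusted samples only
```

StudentNet would then be trained on this weighted loss; MentorNet replaces the fixed threshold rule with a small network that outputs the weights, conditioned on training dynamics.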
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state-of-the-art classification accuracies on CIFAR-10 and ImageNet.
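The progressive, surrogate-guided search can be sketched on a toy structure space. The operation names and the heuristic surrogate below are illustrative assumptions; the paper's surrogate is a learned predictor (e.g. an RNN) trained on already-evaluated structures:

```python
OPS = ["conv3", "conv5", "pool"]  # toy cell operations

def surrogate(structure):
    """Toy stand-in for the learned surrogate model: a fixed heuristic
    score, so the example runs instantly instead of training networks."""
    return structure.count("conv3") - 0.5 * structure.count("pool")

def progressive_search(max_len=3, beam=2):
    beam_set = [()]  # start from the simplest structure: empty
    for _ in range(max_len):
        # Expand each kept structure by one operation (increasing complexity),
        # then let the surrogate choose which candidates to keep, instead of
        # training and evaluating every expansion.
        candidates = [s + (op,) for s in beam_set for op in OPS]
        candidates.sort(key=surrogate, reverse=True)
        beam_set = candidates[:beam]
    return beam_set

print(progressive_search())  # top structures of length 3 under the surrogate
```

The efficiency gain over pure RL search comes from this pruning: only the `beam` most promising expansions at each complexity level are actually trained, and those results in turn refine the surrogate.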
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
The photograph and our understanding of photography are ever changing and have transitioned from a world of unprocessed rolls of C-41 sitting in a fridge 50 years ago to sharing photos on the 1.5" screen of a point-and-shoot camera 10 years back. And today the photograph is again something different. The way we take photos is fundamentally different. We can view, share, and interact with photos on the device they were taken on. We can edit, tag, or "filter" photos directly on the camera at the same time the photo is being taken. Photos can be automatically pushed to various online sharing services, and the distinction between photos and videos has lessened. Beyond this, and more importantly, there are now lots of them. To Facebook alone more than 250 billion photos have been uploaded and on average it receives over 350 million new photos every day [6], while YouTube reports that 300 hours of video are uploaded every minute [22]. A back-of-the-envelope estimation reports 10% of all photos in the world were taken in the last 12 months, and that was calculated already more than three years ago [8]. Today, a large number of the digital media objects that are shared have been uploaded to services like Flickr or Instagram, which along with their metadata and their social ecosystem form a vibrant environment for finding solutions to many research questions at scale. Photos and videos provide a wealth of information about the universe, covering entertainment, travel, personal records, and various other aspects of life in general as it was when they were taken. Considered collectively, they represent knowledge that goes beyond what is captured in any individual snapshot.
* This work was done while Benjamin Elizalde was at ICSI.
† This work was done while Karl Ni was at LLNL.
‡ This work was done while Damian Borth was at ICSI.
§ This work was done while Li-Jia Li was at Yahoo Labs.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT shows strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
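The NAIVEATTACK idea of stamping a trigger onto raw data before distillation begins can be sketched in a few lines. This is an illustrative corner-patch trigger on toy arrays, not the paper's implementation, and the patch size and value are arbitrary assumptions:

```python
import numpy as np

def add_trigger(images: np.ndarray, size: int = 3, value: float = 1.0) -> np.ndarray:
    """Stamp a small constant patch into the bottom-right corner of each
    image. In a NAIVEATTACK-style setup this is applied to a fraction of
    the raw data before distillation, so the trigger pattern can leak into
    the synthetic distilled dataset."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = value
    return poisoned

clean = np.zeros((2, 8, 8), dtype=np.float32)  # two toy grayscale images
poisoned = add_trigger(clean)
print(poisoned[0, -3:, -3:])  # 3x3 block of ones: the backdoor trigger
```

DOORPING differs in that the trigger is not fixed up front but re-optimized at every distillation iteration, which is why it reaches near-perfect attack success rates where this static stamping only sometimes succeeds.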